A Survey of Deep Learning-Based Image Inpainting Methods

JAM J, KENDRICK C, WALKER K, et al. A comprehensive review of past and present image inpainting methods[J]. Computer Vision and Image Understanding, 2021, 203: 103147.

RUMELHART D E, HINTON G E, WILLIAMS R J. Learning internal representations by error propagation[M]//Readings in Cognitive Science. Amsterdam: Elsevier, 1988: 399-421.

GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. New York: ACM, 2014: 2672-2680.

VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.

DHARIWAL P, NICHOL A. Diffusion models beat GANs on image synthesis[EB/OL]. (2021-06-01)[2023-05-25]. https://arxiv.org/abs/2105.05233.

KÖHLER R, SCHULER C, SCHÖLKOPF B, et al. Mask-specific inpainting with deep neural networks[C]//German Conference on Pattern Recognition. Cham: Springer, 2014: 523-534.

REN J S, XU L, YAN Q, et al. Shepard convolutional neural networks[M]//Advances in Neural Information Processing Systems (NIPS). San Francisco: Morgan Kaufmann, 2015.

DAPOGNY A, CORD M, PÉREZ P. The missing data encoder: Cross-channel image completion with hide-and-seek adversarial network[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 10688-10695.

KOUTNÍK J, GREFF K, GOMEZ F, et al. A clockwork RNN[EB/OL]. (2014-02-14)[2023-05-25]. https://arxiv.org/abs/1402.3511.

LECUN Y, BOSER B, DENKER J S, et al. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4): 541-551.

VAN DEN OORD A, KALCHBRENNER N, KAVUKCUOGLU K. Pixel recurrent neural networks[C]//International Conference on Machine Learning. New York: PMLR, 2016: 1747-1756.

HOCHREITER S, SCHMIDHUBER J. Long short-term memory [J]. Neural Computation, 1997, 9(8): 1735-1780.

VAN DEN OORD A, KALCHBRENNER N, VINYALS O, et al. Conditional image generation with PixelCNN decoders[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. New York: ACM, 2016: 4797-4805.

SALIMANS T, KARPATHY A, CHEN X, et al. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications[EB/OL]. (2017-01-19)[2023-05-25]. https://arxiv.org/abs/1701.05517.

OLIVEIRA M M, BOWEN B, MCKENNA R, et al. Fast digital image inpainting[C]//Proceedings of the International Conference on Visualization, Imaging and Image Processing (VIIP 2001). Marbella: [s.n.], 2001: 106-107.

HADHOUD M M, MOUSTAFA K A, SHENODA S Z. Digital images inpainting using modified convolution based method[C]//Proceedings SPIE 7340, Optical Pattern Recognition XX. Orlando: SPIE, 2009, 7340: 234-240.

JAIN V, SEUNG S. Natural image denoising with convolutional networks[C]//Advances in Neural Information Processing Systems. Spain: Curran Associates, Inc, 2008: 769-776.

YU J H, LIN Z, YANG J M, et al. Generative image inpainting with contextual attention[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City: IEEE, 2018: 5505-5514.

SAGONG M C, SHIN Y G, KIM S W, et al. PEPSI: Fast image inpainting with parallel decoding network[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 11352-11360.

MA Y Q, LIU X L, BAI S H, et al. Coarse-to-fine image inpainting via region-wise convolutions and non-local correlation[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao: IJCAI, 2019: 3123-3129.

ZHANG H R, HU Z Z, LUO C Z, et al. Semantic image inpainting with progressive generative networks[C]//Proceedings of the 26th ACM International Conference on Multimedia. New York: ACM, 2018: 1939-1947.

LI J Y, WANG N, ZHANG L F, et al. Recurrent feature reasoning for image inpainting[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 7757-7765.

ZENG Y, LIN Z, YANG J M, et al. High-resolution image inpainting with iterative confidence feedback and guided upsampling[C]// European Conference on Computer Vision (ECCV). Cham: Springer, 2020: 1-17.

YANG C, LU X, LIN Z, et al. High-resolution image inpainting using multi-scale neural patch synthesis[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 4076-4084.

YI Z L, TANG Q, AZIZI S, et al. Contextual residual aggregation for ultrahigh-resolution image inpainting[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 7505-7514.

KULSHRESHTHA P, PUGH B, JIDDI S. Feature refinement to improve high resolution image inpainting[EB/OL]. (2022-06-29)[2023-05-25]. https://arxiv.org/abs/2206.13644.

LIU W H, CUN X D, PUN C M, et al. CoordFill: Efficient high-resolution image inpainting via parameterized coordinate querying[EB/OL]. (2023-03-15)[2023-05-25]. https://arxiv.org/abs/2303.08524.

LIAO L, HU R M, XIAO J, et al. Edge-aware context encoder for image inpainting[C]//2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Calgary: IEEE, 2018: 3156-3160.

NAZERI K, NG E, JOSEPH T, et al. EdgeConnect: Structure guided image inpainting using edge prediction[C]//2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). Seoul: IEEE, 2019: 3265-3274.

LI J Y, HE F X, ZHANG L F, et al. Progressive reconstruction of visual structure for image inpainting [C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). New York: IEEE, 2020: 5961-5970.

REN Y R, YU X M, ZHANG R N, et al. StructureFlow: Image inpainting via structure-aware appearance flow[C]// 2019 IEEE/CVF International Conference on Computer Vision (ICCV). New York: IEEE, 2020: 181-190.

DENG X C, YU Y. Ancient mural inpainting via structure information guided two-branch model[J]. Heritage Science, 2023, 11(1): 1-17.

YANG J E, QI Z Q, SHI Y. Learning to incorporate structure knowledge for image inpainting[J]. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020, 34(7): 12605-12612.

SONG Y H, YANG C, SHEN Y J, et al. SPG-net: Segmentation prediction and guidance network for image inpainting[EB/OL]. (2018-08-06)[2023-05-25]. https://arxiv.org/abs/1805.03356.

YU T, FENG R S, FENG R Y, et al. Inpaint anything: Segment anything meets image inpainting[EB/OL]. (2023-04-13)[2023-05-25]. https://arxiv.org/abs/2304.06790.

LIU Y, PAN J S, SU Z X. Deep blind image inpainting[M]//CUI Z, PAN J S, ZHANG S S, et al., Eds. Intelligence Science and Big Data Engineering. Visual Data Engineering. Cham: Springer International Publishing, 2019: 128-141.

WANG Y, CHEN Y C, TAO X, et al. VCNet: A robust approach to blind image inpainting[C]//European Conference on Computer Vision. Cham: Springer, 2020: 752-768.

PHUTKE S S, KULKARNI A, VIPPARTHI S K, et al. Blind image inpainting via omni-dimensional gated attention and wavelet queries[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Vancouver: IEEE, 2023: 1251-1260.

LIU G L, REDA F A, SHIH K J, et al. Image inpainting for irregular holes using partial convolutions[C]//European Conference on Computer Vision (ECCV). Cham: Springer, 2018: 85-100.

CHEN M, ZHAO X D, XU D Q. Image inpainting for digital Dunhuang murals using partial convolutions and sliding window method[J]. Journal of Physics: Conference Series, 2019, 1302(3): 032040.

WANG N Y, WANG W L, HU W J, et al. Thanka mural inpainting based on multi-scale adaptive partial convolution and stroke-like mask[J]. IEEE Transactions on Image Processing, 2021, 30: 3720-3733.

YU J H, LIN Z, YANG J M, et al. Free-form image inpainting with gated convolution[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2019: 4470-4479.

CHANG Y L, LIU Z Y, LEE K Y, et al. Free-form video inpainting with 3D gated convolution and temporal PatchGAN[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2020: 9065-9074.

LI H A, WANG G Y, GAO K, et al. A gated convolution and self-attention-based pyramid image inpainting network[J]. Journal of Circuits, Systems and Computers, 2022, 31(12): 2250208.

XIE K, GAO L G, ZHANG H, et al. Inpainting truncated areas of CT images based on generative adversarial networks with gated convolution for radiotherapy[J]. Medical & Biological Engineering & Computing, 2023, 61(7): 1757-1772.

MA X X, DENG Y B, ZHANG L, et al. A novel generative image inpainting model with dense gated convolutional network[J]. International Journal of Computers Communications & Control, 2023, 18(2): 1-18.

XIE C H, LIU S H, LI C, et al. Image inpainting with learnable bidirectional attention maps[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2020: 8857-8866.

GUO X F, YANG H Y, HUANG D. Image inpainting via conditional texture and structure dual generation[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal: IEEE, 2022: 14114-14123.

ZHAO S Y, CUI J, SHENG Y L, et al. Large scale image completion via co-modulated generative adversarial networks[EB/OL]. (2021-03-18)[2023-05-25]. https://arxiv.org/abs/2103.10428.

ZHENG H T, LIN Z, LU J W, et al. Image inpainting with cascaded modulation GAN and object-aware training[C]//AVIDAN S, BROSTOW G, CISSÉ M, et al. European Conference on Computer Vision. Cham: Springer, 2022: 277-296.

VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.

WAN Z Y, ZHANG J B, CHEN D D, et al. High-fidelity pluralistic image completion with transformers[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal: IEEE, 2022: 4672-4681.

ZHOU Y Q, BARNES C, SHECHTMAN E, et al. TransFill: Reference-guided image inpainting by merging multiple color and spatial transformations[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 2266-2267.

WANG J K, CHEN S X, WU Z X, et al. FT-TDR: Frequency-guided transformer and top-down refinement network for blind face inpainting[J]. IEEE Transactions on Multimedia, 2023, 25: 2382-2392.

DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale[EB/OL]. (2021-06-03)[2023-05-25]. https://arxiv.org/abs/2010.11929.

CAO C J, DONG Q L, FU Y M. Learning prior feature and attention enhanced image inpainting[C]//AVIDAN S, BROSTOW G, CISSÉ M, et al. European Conference on Computer Vision. Cham: Springer, 2022: 306-322.

YU Y S, DU D W, ZHANG L B, et al. Unbiased multi-modality guidance for image inpainting[C]//AVIDAN S, BROSTOW G, CISSÉ M, et al. European Conference on Computer Vision. Cham: Springer, 2022: 668-684.

LIU H P, WANG Y, WANG M, et al. Delving globally into texture and structure for image inpainting[C]//Proceedings of the 30th ACM International Conference on Multimedia. New York: ACM, 2022: 1270-1278.

CHEN B L, LIU T J, LIU K H. Lightweight image inpainting by stripe window transformer with joint attention to CNN[EB/OL]. (2023-01-02)[2023-05-25]. https://arxiv.org/abs/2301.00553.

NADERI M, GIVKASHI M H, KARIMI N, et al. SFI-swin: Symmetric face inpainting with swin transformer by distinctly learning face components distributions[EB/OL]. (2023-01-09)[2023-05-25]. https://arxiv.org/abs/2301.03130.

LIAO L, LIU T R, CHEN D L, et al. TransRef: Multi-scale reference embedding transformer for reference-guided image inpainting[EB/OL]. (2023-06-20)[2023-05-25]. https://arxiv.org/abs/2306.11528.

DHARIWAL P, NICHOL A. Diffusion models beat GANs on image synthesis[EB/OL]. (2021-06-01)[2023-05-25]. https://arxiv.org/abs/2105.05233.

HO J, JAIN A, ABBEEL P. Denoising diffusion probabilistic models[M]//Advances in Neural Information Processing Systems. San Francisco: Morgan Kaufmann, 2020.

LUGMAYR A, DANELLJAN M, ROMERO A, et al. RePaint: Inpainting using denoising diffusion probabilistic models[EB/OL]. (2022-08-31)[2023-07-25]. https://arxiv.org/abs/2201.09865.

LI W B, YU X, ZHOU K, et al. SDM: Spatial diffusion model for large hole image inpainting[EB/OL]. (2023-03-08)[2023-07-25]. https://arxiv.org/abs/2212.02963.

HORITA D, YANG J L, CHEN D, et al. A structure-guided diffusion model for large-hole diverse image completion[EB/OL]. (2022-11-18)[2023-07-25]. https://arxiv.org/abs/2211.10437.

GRILL J B, STRUB F, ALTCHÉ F, et al. Bootstrap your own latent: A new approach to self-supervised learning[EB/OL]. (2020-09-10)[2023-05-25]. https://arxiv.org/abs/2006.07733.

ROMBACH R, BLATTMANN A, LORENZ D, et al. High-resolution image synthesis with latent diffusion models[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans: IEEE, 2022: 10674-10685.

ISKAKOV K. Semi-parametric image inpainting[EB/OL]. (2018-11-13)[2023-07-25]. https://arxiv.org/abs/1807.02855.

XIONG W, YU J H, LIN Z, et al. Foreground-aware image inpainting[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2020: 5833-5841.

NETZER Y. Reading digits in natural images with unsupervised feature learning[C]//Proceedings of the NIPS Workshop on Deep Learning and Unsupervised Feature Learning. Granada: NIPS Foundation, 2011.

DOERSCH C, SINGH S, GUPTA A, et al. What makes Paris look like Paris?[J]. ACM Transactions on Graphics, 2012, 31(4): 1-9.

CORDTS M, OMRAN M, RAMOS S, et al. The cityscapes dataset for semantic urban scene understanding[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE, 2016: 3213-3223.

LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[C]//European Conference on Computer Vision. Cham: Springer, 2014: 740-755.

RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.

ZHOU B L, LAPEDRIZA A, KHOSLA A, et al. Places: A 10 million image database for scene recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(6): 1452-1464.

LE V, BRANDT J, LIN Z, et al. Interactive facial feature localization[C]//European Conference on Computer Vision. Berlin, Heidelberg: Springer, 2012: 679-692.

LIU Z W, LUO P, WANG X G, et al. Deep learning face attributes in the wild[C]//2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE, 2016: 3730-3738.

KARRAS T, AILA T M, LAINE S, et al. Progressive growing of GANs for improved quality, stability, and variation[EB/OL]. (2018-02-26)[2023-05-25]. https://arxiv.org/abs/1710.10196.

KARRAS T, LAINE S, AILA T M. A style-based generator architecture for generative adversarial networks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2020: 4396-4405.

TYLECEK R, ŠÁRA R. Spatial pattern templates for recognition of objects with regular structure[C]//35th German Conference on Pattern Recognition. Berlin, Heidelberg: Springer, 2013: 364-374.

CIMPOI M, MAJI S, KOKKINOS I, et al. Describing textures in the wild[C]//2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Columbus: IEEE, 2014: 3606-3613.

KRAUSE J, STARK M, DENG J, et al. 3D object representations for fine-grained categorization[C]//2013 IEEE International Conference on Computer Vision Workshops (ICCVW). Sydney: IEEE, 2014: 554-561.

WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.

SALIMANS T, GOODFELLOW I, ZAREMBA W, et al. Improved techniques for training GANs[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. New York: ACM, 2016: 2234-2242.

HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6629-6640.

LOSSON O, MACAIRE L, YANG Y. Comparison of color demosaicing methods[M]//Advances in Imaging and Electron Physics. Amsterdam: Elsevier, 2010, 162: 173-265.

HACCIUS C, HERFET T. Computer vision performance and image quality metrics: A reciprocal relation[C]//Computer Science & Information Technology (CS & IT). Florence: Academy & Industry Research Collaboration Center (AIRCC), 2017: 27-37.

ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 586-595.


